Behavior of Accelerated Gradient Methods near Critical Points of Nonconvex Problems
Authors
Abstract
We examine the behavior of accelerated gradient methods in smooth nonconvex unconstrained optimization. Each of these methods is typically a linear combination of a gradient direction and the previous step. We show by means of the stable manifold theorem that the heavy-ball method does not converge to critical points that fail to satisfy second-order necessary conditions. We then examine the behavior of two accelerated gradient methods, the heavy-ball method and Nesterov's method [6], in the vicinity of the saddle point of a nonconvex quadratic function, showing in both cases that the accelerated method can diverge from this point more rapidly than steepest descent.
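A minimal numerical sketch of the divergence behavior described above, on the quadratic f(x) = ½ xᵀHx with H = diag(1, −1), whose critical point at the origin is a saddle. The step size alpha, momentum beta, starting point, and iteration count are illustrative choices, not values from the paper:

```python
import numpy as np

# Saddle-point quadratic: f(x) = 0.5 * x^T H x with H = diag(1, -1).
# The origin is a critical point that fails the second-order
# necessary condition (H has a negative eigenvalue).
H = np.diag([1.0, -1.0])

def grad(x):
    return H @ x

def run(method, x0, alpha=0.1, beta=0.8, iters=50):
    """Iterate from x0; return the distance from the saddle at each step."""
    x, x_prev = x0.copy(), x0.copy()
    dists = [np.linalg.norm(x)]
    for _ in range(iters):
        if method == "gd":            # steepest descent
            x_new = x - alpha * grad(x)
        elif method == "heavy_ball":  # gradient step plus momentum term
            x_new = x - alpha * grad(x) + beta * (x - x_prev)
        elif method == "nesterov":    # gradient taken at the extrapolated point
            y = x + beta * (x - x_prev)
            x_new = y - alpha * grad(y)
        x_prev, x = x, x_new
        dists.append(np.linalg.norm(x))
    return dists

x0 = np.array([1e-3, 1e-3])  # start near the saddle at the origin
for method in ("gd", "heavy_ball", "nesterov"):
    print(method, run(method, x0)[-1])
```

Along the negative-curvature direction, each method multiplies the iterate by a fixed factor per step; with these parameters the heavy-ball and Nesterov factors (about 1.27 and 1.31, the dominant roots of the respective two-term recurrences) exceed the steepest-descent factor of 1.1, so the accelerated iterates leave the saddle faster.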
Similar resources
Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization
In many modern machine learning applications, the structure of the underlying mathematical model often yields a nonconvex optimization problem. Due to the intractability of nonconvexity, there is a rising need for efficient methods that solve general nonconvex problems with certain performance guarantees. In this work, we investigate the accelerated proximal gradient method for nonconvex program...
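The update rule itself is cut off in the excerpt above; the following is a generic sketch of one common form of proximal gradient with momentum, applied to a composite objective f(x) + λ‖x‖₁, with all names and parameter values chosen for illustration:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def accelerated_prox_grad(grad_f, x0, alpha, lam, beta=0.9, iters=100):
    """Minimize f(x) + lam*||x||_1: extrapolate, gradient step, then prox."""
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)              # momentum / extrapolation
        x_prev = x
        x = soft_threshold(y - alpha * grad_f(y), alpha * lam)
    return x

# Hypothetical test problem: sparse least squares, f(x) = 0.5*||Ax - b||^2
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
grad_f = lambda x: A.T @ (A @ x - b)
alpha = 1.0 / np.linalg.norm(A, 2) ** 2          # step size 1/L
print(accelerated_prox_grad(grad_f, np.zeros(5), alpha, lam=0.5))
```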
Accelerated Proximal Gradient Methods for Nonconvex Programming
Nonconvex and nonsmooth problems have recently received considerable attention in signal/image processing, statistics and machine learning. However, solving such nonconvex and nonsmooth optimization problems remains a significant challenge. Accelerated proximal gradient (APG) is an excellent method for convex programming, but it is still unknown whether the usual APG can ensure the convergence to a...
Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent
Nesterov's accelerated gradient descent (AGD), an instance of the general family of "momentum methods", provably achieves a faster convergence rate than gradient descent (GD) in the convex setting. However, whether these methods are superior to GD in the nonconvex setting remains open. This paper studies a simple variant of AGD, and shows that it escapes saddle points and finds a second-order stat...
Generalized Uniformly Optimal Methods for Nonlinear Programming
In this paper, we present a generic framework to extend existing uniformly optimal convex programming algorithms to solve more general nonlinear, possibly nonconvex, optimization problems. The basic idea is to incorporate a local search step (gradient descent or Quasi-Newton iteration) into these uniformly optimal convex programming methods, and then enforce a monotone decreasing property of th...
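The excerpt stops before the details of the framework; the sketch below shows only the general idea it describes: take the accelerated step when it preserves monotone decrease of the objective, and otherwise fall back to a local search (plain gradient) step. Function names, parameters, and the test problem are hypothetical:

```python
import numpy as np

def monotone_accelerated_gd(f, grad_f, x0, alpha, beta=0.9, iters=100):
    """Accelerated gradient with a monotonicity safeguard: if the
    accelerated candidate does not decrease f, take a plain gradient
    (local search) step from the current point instead."""
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)          # extrapolated point
        candidate = y - alpha * grad_f(y)    # accelerated step
        if f(candidate) <= f(x):             # monotone decrease holds
            x_prev, x = x, candidate
        else:                                # fall back to local search
            x_prev, x = x, x - alpha * grad_f(x)
    return x

# Nonconvex test function: f(x) = 0.25*||x||^4 - 0.5*||x||^2,
# minimized on the unit sphere ||x|| = 1.
f = lambda x: 0.25 * np.dot(x, x) ** 2 - 0.5 * np.dot(x, x)
grad_f = lambda x: (np.dot(x, x) - 1.0) * x
print(monotone_accelerated_gd(f, grad_f, np.array([2.0, 0.0]), alpha=0.05))
```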
Accelerated gradient methods for nonconvex nonlinear and stochastic programming
In this paper, we generalize the well-known Nesterov’s accelerated gradient (AG) method, originally designed for convex smooth optimization, to solve nonconvex and possibly stochastic optimization problems. We demonstrate that by properly specifying the stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems by using ...
Publication date: 2017